277 research outputs found

    HMM-based Offline Recognition of Handwritten Words Crossed Out with Different Kinds of Strokes

    In this work, we investigate the recognition of words that have been crossed out by their writers and are thus degraded. The degradation consists of one or more ink strokes that span the whole word length and simulate the signs writers use to cross out words. The simulated strokes are superimposed on the original clean word images. We considered two types of strokes: wave-trajectory strokes created with spline curves and line-trajectory strokes generated with the delta-lognormal model of rapid line movements. The experiments were performed with a recognition system based on hidden Markov models, and the results show that the performance decrease is moderate for single-writer data and light strokes, but severe for multiple-writer data.
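A minimal sketch of the degradation step described above, assuming grayscale word images (white background, dark ink). A plain sinusoid stands in for the spline-based wave trajectory; the delta-lognormal line strokes are not modeled here, and all names and parameters are illustrative, not the paper's code.

```python
import numpy as np

def wave_stroke(width, amplitude=3.0, periods=2.0, thickness=1):
    """(row, col) pixel offsets of a wave-trajectory stroke spanning the full
    image width (a sinusoid standing in for a spline trajectory)."""
    cols = np.arange(width)
    rows = amplitude * np.sin(2 * np.pi * periods * cols / width)
    coords = []
    for c, r in zip(cols, rows):
        for t in range(-thickness, thickness + 1):
            coords.append((int(round(r)) + t, c))
    return coords

def cross_out(word_img, ink=0):
    """Superimpose the simulated stroke on a copy of a clean word image
    (2-D uint8 array, white background = 255, ink = 0)."""
    h, w = word_img.shape
    out = word_img.copy()
    for r, c in wave_stroke(w):
        out[h // 2 + r, c] = ink  # stroke centred on the word's midline
    return out

# Degrade a blank "word" image; the clean original is left untouched.
clean = np.full((40, 200), 255, dtype=np.uint8)
degraded = cross_out(clean)
print((degraded == 0).sum() > 0)  # stroke pixels were drawn
```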

    SAM: The School Attachment Monitor

    Secure Attachment relationships have been shown to minimise social and behavioural problems in children and to boost resilience to later risks such as antisocial behaviour, heart pathologies, and suicide. Attachment assessment is an expensive and time-consuming process that is rarely performed. The School Attachment Monitor (SAM) automates Attachment assessment to support expert assessors. It uses doll-play activities, with the dolls augmented with sensors and the child's play recorded with cameras, to provide data for assessment. Social signal processing tools then analyse the data and automatically categorise Attachment patterns. This paper presents the current SAM interactive prototype.

    Preface of the Proceedings of the Doctoral Consortium

    This volume collects the contributions presented at the ACII 2009 Doctoral Consortium, an event aimed at gathering PhD students to share ideas about the theories behind affective computing, its development, and its applications. The published papers were selected from a large number of high-quality submissions covering a wide spectrum of topics, including the analysis of human-human, human-machine and human-robot interactions, the analysis of physiology and nonverbal behavior in affective phenomena, the effect of emotions on language and spoken interaction, and the embodiment of affective behaviors.

    Deep Impression: Audiovisual Deep Residual Networks for Multimodal Apparent Personality Trait Recognition

    Full text link
    Here, we develop an audiovisual deep residual network for multimodal apparent personality trait recognition. The network is trained end-to-end to predict the Big Five personality traits of people from their videos; it requires no feature engineering or visual analysis such as face detection, face landmark alignment or facial expression recognition. The network recently won third place in the ChaLearn First Impressions Challenge with a test accuracy of 0.9109.
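A toy sketch of the two ideas the abstract relies on: late fusion of per-trait scores from an audio and a visual subnetwork, and the accuracy score commonly used for apparent-personality regression (1 minus mean absolute error). The weights, trait scores and targets below are made up for illustration; this is not the paper's residual architecture.

```python
import numpy as np

TRAITS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]

def fuse(audio_scores, visual_scores, w_audio=0.5):
    """Late fusion: weighted average of per-trait scores in [0, 1]
    produced by the two modality subnetworks."""
    return w_audio * np.asarray(audio_scores) + (1 - w_audio) * np.asarray(visual_scores)

def mean_accuracy(pred, target):
    """1 - mean absolute error over the five traits."""
    return 1.0 - np.mean(np.abs(np.asarray(pred) - np.asarray(target)))

# Hypothetical subnetwork outputs for one video clip.
audio = np.array([0.60, 0.40, 0.70, 0.50, 0.30])
visual = np.array([0.50, 0.50, 0.60, 0.60, 0.40])
pred = fuse(audio, visual)
print(mean_accuracy(pred, np.array([0.55, 0.45, 0.65, 0.55, 0.35])))  # → 1.0
```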

    Humanoid and android robots in the imaginary of adolescents, young adults and seniors

    This paper investigates the effects of participants' gender and age (adolescents, young adults, and seniors), robots' gender (male and female robots) and appearance (humanoid vs android) on dimensions of robot acceptance. The study involved six differently aged groups of participants (two adolescent, two young adult and two senior groups, for a total of 240 participants), who were asked to express their willingness to interact with, and their perception of the usefulness, pleasantness, appeal, and engagement of, two different sets of female (Pepper, Erica, and Sophia) and male (Romeo, Albert, and Yuri) humanoid and android robots. Participants were also asked to express their preferred and attributed age ranges for the robots and the occupations they would entrust to them among healthcare, housework, protection and security, and front office. Results show that neither participants' age and gender, nor robots' gender, nor robots' human likeness univocally affected robot acceptance by these differently aged users. Robot acceptance appeared to be a nonlinear combination of all these factors.

    Discriminative power of EEG-based biomarkers in major depressive disorder: A systematic review

    Currently, the diagnosis of major depressive disorder (MDD) and its subtypes is mainly based on subjective assessments and self-reported measures. However, objective criteria such as electroencephalography (EEG) features would help detect depressive states at early stages and prevent the worsening of symptoms. The scientific community has widely investigated the effectiveness of EEG-based measures in discriminating between depressed and healthy subjects, with the aim of better understanding the mechanisms behind the disorder and finding biomarkers useful for diagnosis. This work offers a comprehensive review of the extant literature on EEG-based biomarkers for MDD and its subtypes, and identifies possible future directions for this line of research. The Scopus, PubMed and Web of Science databases were searched following the PRISMA guidelines. The initial screening of papers was based on titles and abstracts; the full texts of the identified articles were then examined, and a synthesis of findings was developed using tables and thematic analysis. After screening 1871 articles, 76 studies were identified as relevant and included in the systematic review. Reviewed markers include EEG frequency band power, EEG asymmetry, ERP components, and non-linear and functional connectivity measures. Results are discussed in relation to the different EEG measures assessed in the studies. Findings confirm the effectiveness of those measures in discriminating between healthy and depressed subjects. However, the review highlights that the causal link between EEG measures and depressive subtypes needs further investigation, and points out that some methodological issues must be solved to advance future research in this field.
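Two of the marker families reviewed above, band power and asymmetry, can be illustrated with a short numpy sketch. Frontal alpha asymmetry is computed here as the difference of log alpha-band (8-13 Hz) powers between a right and a left channel; the synthetic signals, channel names and band edges are illustrative assumptions, not data or definitions from any reviewed study.

```python
import numpy as np

def band_power(signal, fs, band):
    """Absolute power of `signal` in a frequency band via an FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].sum()

def alpha_asymmetry(left, right, fs, alpha=(8.0, 13.0)):
    """Frontal alpha asymmetry: ln(right alpha power) - ln(left alpha power)."""
    return np.log(band_power(right, fs, alpha)) - np.log(band_power(left, fs, alpha))

# Two synthetic channels: the "right" one carries a stronger 10 Hz rhythm.
fs = 256
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
left = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
right = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
print(alpha_asymmetry(left, right, fs) > 0)  # True: more alpha power on the right
```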

    Deep Structure Inference Network for Facial Action Unit Recognition

    Facial expressions are combinations of basic components called Action Units (AUs). Recognizing AUs is key to developing general facial expression analysis. In recent years, most efforts in automatic AU recognition have been dedicated to learning combinations of local features and to exploiting correlations between Action Units. In this paper, we propose a deep neural architecture that tackles both problems by combining learned local and global features in its initial stages and, in later stages, replicating a message passing algorithm between classes similar to graphical model inference. We show that by training the model end-to-end with increased supervision we improve state-of-the-art performance by 5.3% and 8.2% on the BP4D and DISFA datasets, respectively.
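The idea of message passing between AU classes can be sketched in a few lines: each AU's independent logit is iteratively refined by weighted "messages" from related AUs, positive weights for co-occurring units and negative for mutually exclusive ones. This is a toy numpy illustration of the concept, not the paper's architecture; the relation weights and logits below are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def refine_au_scores(logits, relation, steps=3, damping=0.5):
    """Iteratively refine per-AU logits with messages from related AUs.

    logits:   (n_aus,) independent per-AU predictions
    relation: (n_aus, n_aus) signed relation weights (positive = tends to
              co-occur, negative = tends to exclude), zero diagonal
    """
    x = logits.copy()
    for _ in range(steps):
        messages = relation @ np.tanh(x)          # each AU hears from its neighbours
        x = (1 - damping) * x + damping * (logits + messages)
    return sigmoid(x)

# Toy example: AU "1" co-occurs with AU "2"; AU "3" excludes AU "1".
logits = np.array([2.0, -0.2, 0.1])               # only the first AU is confidently active
relation = np.array([[ 0.0, 1.0, -1.0],
                     [ 1.0, 0.0,  0.0],
                     [-1.0, 0.0,  0.0]])
probs = refine_au_scores(logits, relation)
print(probs[1] > sigmoid(-0.2))  # second AU pulled up by its co-occurring neighbour
print(probs[2] < sigmoid(0.1))   # third AU pushed down by the exclusion relation
```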